DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers are needed to manually screen each submission before it is approved to be posted on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve:
- How to scale current manual processes and resources to screen 500,000 projects so that they can be posted as quickly and as efficiently as possible
- How to increase the consistency of project vetting across different volunteers to improve the experience for teachers
- How to focus volunteer time on the applications that need the most assistance
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
The train.csv data set provided by DonorsChoose contains the following features:
| Feature | Description |
|---|---|
| `project_id` | A unique identifier for the proposed project. Example: `p036502` |
| `project_title` | Title of the project. |
| `project_grade_category` | Grade level of students for which the project is targeted. One of: Grades PreK-2, Grades 3-5, Grades 6-8, Grades 9-12 |
| `project_subject_categories` | One or more (comma-separated) subject categories for the project, e.g. `Math & Science, Warmth, Care & Hunger` |
| `school_state` | State where the school is located (two-letter U.S. postal code). Example: `WY` |
| `project_subject_subcategories` | One or more (comma-separated) subject subcategories for the project. |
| `project_resource_summary` | An explanation of the resources needed for the project. |
| `project_essay_1` | First application essay* |
| `project_essay_2` | Second application essay* |
| `project_essay_3` | Third application essay* |
| `project_essay_4` | Fourth application essay* |
| `project_submitted_datetime` | Datetime when the project application was submitted. Example: `2016-04-28 12:43:56.245` |
| `teacher_id` | A unique identifier for the teacher of the proposed project. Example: `bdf8baa8fedef6bfeec7ae4ff1c15c56` |
| `teacher_prefix` | Teacher's title. One of: `Dr.`, `Mr.`, `Mrs.`, `Ms.`, `Teacher` |
| `teacher_number_of_previously_posted_projects` | Number of project applications previously submitted by the same teacher. Example: `2` |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:
| Feature | Description |
|---|---|
| `id` | A `project_id` value from the train.csv file. Example: `p036502` |
| `description` | Description of the resource. Example: `Tenor Saxophone Reeds, Box of 25` |
| `quantity` | Quantity of the resource required. Example: `3` |
| `price` | Price of the resource required. Example: `9.95` |
Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so you can use it as a key to retrieve all resources needed for a project.
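For example, assuming resources.csv has been loaded into a DataFrame named resource_data (as is done later in this notebook), all resources for one project can be looked up like this:

# all resources requested by the example project p036502
one_project = resource_data[resource_data['id'] == 'p036502']
print(one_project[['description', 'quantity', 'price']])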
The data set contains the following label (the value you will attempt to predict):
| Label | Description |
|---|---|
| `project_is_approved` | A binary flag indicating whether DonorsChoose approved the project: 0 = not approved, 1 = approved. |
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")

import os
import re  # tutorial on Python regular expressions: https://pymotw.com/2/re/
import string
import pickle
from collections import Counter

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn import metrics
from sklearn.metrics import confusion_matrix, roc_curve, auc

from gensim.models import Word2Vec, KeyedVectors

from tqdm import tqdm

import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
# total price and quantity per project, merged onto the project data
price_data = resource_data.groupby('id').agg({'price': 'sum', 'quantity': 'sum'}).reset_index()
project_data = pd.merge(project_data, price_data, on='id', how='left')
# subsample 40k projects to keep the feature matrices tractable
project_data = project_data.sample(n=40000)
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
#### Preprocessing project_subject_categories

# Clean the comma-separated category strings,
# e.g. "Math & Science, Warmth, Care & Hunger" -> "Math_Science Warmth Care_Hunger"
# References:
# remove special characters from a list of strings: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
categories = list(project_data['project_subject_categories'].values)
cat_list = []
for i in categories:
    temp = ""
    for j in i.split(','):         # "Math & Science, Warmth, Care & Hunger" -> ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():     # drop the word 'The' (e.g. "Music & The Arts")
            j = j.replace('The', '')
        j = j.replace(' ', '')     # "Math & Science" -> "Math&Science"
        temp += j.strip() + " "
    temp = temp.replace('&', '_')  # "Math&Science" -> "Math_Science"
    cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
# count how often each cleaned category occurs
my_counter = Counter()
for word in project_data['clean_categories'].values:
    my_counter.update(word.split())
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
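A quick bar plot of these category counts, following the plotting guidelines repeated throughout this notebook (the styling choices here are ours):

# bar plot of how often each cleaned category occurs
plt.figure(figsize=(10, 4))
plt.bar(list(sorted_cat_dict.keys()), list(sorted_cat_dict.values()))
plt.xticks(rotation=90)
plt.xlabel('Project category')
plt.ylabel('Number of occurrences')
plt.title('Frequency of cleaned project_subject_categories')
plt.show()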
#### Preprocessing project_subject_subcategories

# Same cleaning as above, applied to the subcategory strings.
# References:
# remove special characters from a list of strings: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_categories = list(project_data['project_subject_subcategories'].values)
sub_cat_list = []
for i in sub_categories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
my_counter = Counter()
for word in project_data['clean_subcategories'].values:
    my_counter.update(word.split())
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
# merge the four essay columns into a single text column
# (essays 3 and 4 are missing for most projects, so fill NaN with ''
# instead of letting map(str) produce literal 'nan' tokens; join with
# spaces so the last word of one essay does not glue onto the next)
project_data["essay"] = project_data["project_essay_1"].fillna('').map(str) + " " + \
                        project_data["project_essay_2"].fillna('').map(str) + " " + \
                        project_data["project_essay_3"].fillna('').map(str) + " " + \
                        project_data["project_essay_4"].fillna('').map(str)
project_data.head(2)
#### Preprocessing the essay text
# printing some random essays
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
# expand English contractions: https://stackoverflow.com/a/47091490/4084039
def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
# remove special characters: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# note: 'no', 'nor' and 'not' are deliberately left out of this stopword list so that negations are preserved
stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"]
# Combining all of the above steps
preprocessed_essays = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['essay'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    # compare lowercased tokens against the stopword list (as in the title loop below)
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_essays.append(sent.lower().strip())
# after preprocessing
preprocessed_essays[20000]
project_data['clean_essays'] = preprocessed_essays
project_data.drop(['essay'], axis=1, inplace=True)
# similarly, preprocess the project titles
preprocessed_titles = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['project_title'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_titles.append(sent.lower().strip())
project_data['clean_project_titles'] = preprocessed_titles
project_data.drop(['project_title'], axis=1, inplace=True)
project_data.head(2)
# sentiment polarity of each essay with TextBlob
# https://planspace.org/20150607-textblob_sentiment/
# https://stackoverflow.com/questions/43485469/apply-textblob-in-for-each-row-of-a-dataframe
from textblob import TextBlob

def cal_sentiment_polarity(inputString):
    try:
        return TextBlob(inputString).sentiment.polarity
    except:
        return 0

project_data["essay_sentiment"] = project_data["clean_essays"].apply(cal_sentiment_polarity)
# word count via regex word matching
def count_words(inputString):
    try:
        return len(re.findall(r'\w+', inputString))
    except:
        return None
project_data["title_word_count"]=project_data["clean_project_titles"].apply(count_words)
project_data["combine_essay_word_count"]=project_data["clean_essays"].apply(count_words)
project_data.columns
We are going to consider the following features:
- school_state : categorical data
- clean_categories : categorical data
- clean_subcategories : categorical data
- project_grade_category : categorical data
- teacher_prefix : categorical data
- clean_project_titles : text data
- clean_essays : text data
- project_resource_summary : text data (optional)
- quantity : numerical (optional)
- teacher_number_of_previously_posted_projects : numerical
- price : numerical

For the text features:
- Select the number of components (n_components) for TruncatedSVD using the elbow method.
- The shape of the co-occurrence matrix after TruncatedSVD will be 2000 x n, i.e. each row represents a vector form of the corresponding word.
- Vectorize the essay text and project titles using these word vectors (while vectorizing, ignore all words that are not in the top 2k words).
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentation and blogs before you start coding
# first figure out what to do, and then think about how to do it
# reading and understanding error messages will be very helpful in debugging your code
# when you plot any graph make sure you use
# a. a title that describes your plot; this will be very helpful to the reader
# b. legends if needed
# c. an x-axis label
# d. a y-axis label
# Combine essay and title text, fit TF-IDF on it, and keep the
# top 2000 words by idf score
project_data['essay_and_title'] = project_data['clean_essays'] + ' ' + project_data['clean_project_titles']
tfidf_model = TfidfVectorizer()
tfidf_model.fit_transform(project_data['essay_and_title'].values)
indices = np.argsort(tfidf_model.idf_)[::-1]  # feature indices sorted by descending idf
features = tfidf_model.get_feature_names()
top_n = 2000
top_words = [features[i] for i in indices[:top_n]]
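Because idf_ is sorted in descending order, top_words holds the 2,000 rarest words of the fitted vocabulary rather than the most frequent ones; a quick sanity check:

# sanity check: these should be rare, high-idf words
print(len(top_words))  # 2000
print(top_words[:10])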
# https://datascience.stackexchange.com/questions/40038/how-to-implement-word-to-word-co-occurence-matrix-in-python
def co_occurrence_matrix(text_df, n_feat):
    """Symmetric word-word co-occurrence counts over the top_words vocabulary,
    counting neighbours within a window of 5 words to the right of each position."""
    print("Co-occurrence Matrix")
    # n_feat x n_feat matrix, initially all zeros
    array = np.zeros((n_feat, n_feat), dtype=np.int64)
    df = pd.DataFrame(array, index=top_words, columns=top_words)
    for sent in tqdm(text_df['essay_and_title'].values):
        words = sent.split(" ")
        for i in range(len(words)):
            for neigh in range(1, 6):  # neighbours at offsets 1..5
                if i + neigh < len(words) and words[i] != words[i + neigh]:
                    try:
                        df.loc[words[i], words[i + neigh]] += 1
                        df.loc[words[i + neigh], words[i]] += 1
                    except KeyError:
                        pass  # skip pairs where a word is not in the top-2000 vocabulary
    print(df.shape)
    return df
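As a sanity check of the windowing logic, here is a tiny hand-made example (the three-word vocabulary and sentence are hypothetical, not from the dataset):

# toy example: one sentence, three-word vocabulary
toy_vocab = ['cats', 'really', 'like']
toy_df = pd.DataFrame(np.zeros((3, 3), dtype=int), index=toy_vocab, columns=toy_vocab)
words = "cats really like cats".split()
for i in range(len(words)):
    for neigh in range(1, 6):
        if i + neigh < len(words) and words[i] != words[i + neigh]:
            toy_df.loc[words[i], words[i + neigh]] += 1
            toy_df.loc[words[i + neigh], words[i]] += 1
print(toy_df)  # each co-occurring pair within the 5-word window gets a symmetric count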
# cache the co-occurrence matrix on disk (requires the feather-format package)
import feather
if os.path.isfile('CoOccurance.feather'):
    df = pd.read_feather('CoOccurance.feather')
else:
    df = co_occurrence_matrix(project_data, top_n)
    feather.write_dataframe(df, 'CoOccurance.feather')
from sklearn.decomposition import TruncatedSVD

def SVD_Truncated(coo_matrix, no_of_comp):
    global Max_svd
    MaxVar = -1    # best explained variance seen so far
    Max_svd = None
    # fit a TruncatedSVD per candidate n_components and keep the one
    # with the highest total explained variance
    for n_comp in no_of_comp:
        svd_matrix = TruncatedSVD(n_components=n_comp)
        svd = svd_matrix.fit(coo_matrix)
        exp_sum = svd.explained_variance_ratio_.sum()
        if exp_sum > MaxVar:
            Max_svd = svd
            MaxVar = exp_sum
    print("Max explained variance =", MaxVar)
    percentage_var_explained = Max_svd.explained_variance_ / np.sum(Max_svd.explained_variance_)
    cum_var_explained = np.cumsum(percentage_var_explained)
    # elbow plot for the best model
    plt.plot(cum_var_explained, linewidth=2)
    plt.grid()
    plt.xlabel('n_components')
    plt.ylabel('Cumulative explained variance')
    plt.title("Cumulative explained variance vs n_components")
    plt.show()
    # transform with the best model (Max_svd now holds the word-vector matrix)
    Max_svd = Max_svd.transform(coo_matrix)
SVD_Truncated(df,[100,500,800,1000,1200,1500])
For n_components = 100 the cumulative explained variance already reaches ~100%, so we can use n_components = 100 in further stages.
SVD_Truncated(df,[100])
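After this call, Max_svd holds the 2000 x 100 transformed matrix, so each row is the vector of the word at the same index in top_words. A small sketch (the word_vectors name is ours, not from the references below):

# map each of the top-2000 words to its 100-dimensional SVD vector
word_vectors = dict(zip(top_words, Max_svd))
print(top_words[0], word_vectors[top_words[0]][:5])  # first 5 components of its vector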
#https://stackoverflow.com/questions/50915223/how-to-get-the-vector-representation-of-a-word-using-a-trained-svd-model
#https://www.analyticsvidhya.com/blog/2018/10/stepwise-guide-topic-modeling-latent-semantic-analysis/
#https://www.kaggle.com/dex314/tfidf-truncatedsvd-and-light-gbm
from sklearn import model_selection
X=project_data.drop(['project_is_approved'],axis=1)
y=project_data[['project_is_approved']]
X_tr, X_test, y_tr, y_test = model_selection.train_test_split(X, y, test_size=0.2, random_state=0,stratify=y)
print(X_tr.shape, y_tr.shape)
print(X_test.shape, y_test.shape)
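The approved/rejected classes in this dataset are imbalanced (most projects are approved), so it is worth verifying that stratify=y preserved the class ratio in both splits:

# the two ratios should be (almost) identical thanks to stratify=y
print(y_tr['project_is_approved'].value_counts(normalize=True))
print(y_test['project_is_approved'].value_counts(normalize=True))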
# TF-IDF on the essays, reduced to 100 dimensions with TruncatedSVD.
# Fit both the vectorizer and the SVD on the training data only, then
# only transform the test data (featurize train and test separately).
tfidf = TfidfVectorizer(min_df=5, max_features=top_n)
tfidf.fit(X_tr['clean_essays'].values)
svdT = TruncatedSVD(n_components=100)
text_tfidf_tr = svdT.fit_transform(tfidf.transform(X_tr['clean_essays'].values))
text_tfidf_test = svdT.transform(tfidf.transform(X_test['clean_essays'].values))
print(text_tfidf_tr.shape, text_tfidf_test.shape)

# same for the project titles
tfidf = TfidfVectorizer(min_df=5, max_features=top_n)
tfidf.fit(X_tr['clean_project_titles'].values)
svdT = TruncatedSVD(n_components=100)
title_tfidf_tr = svdT.fit_transform(tfidf.transform(X_tr['clean_project_titles'].values))
title_tfidf_test = svdT.transform(tfidf.transform(X_test['clean_project_titles'].values))
print(title_tfidf_tr.shape, title_tfidf_test.shape)
from sklearn.feature_extraction.text import CountVectorizer
# One-hot encode each categorical feature: fit the vectorizer on the
# training data, then transform both train and test with it.
# (The same CountVectorizer instance is re-fitted before each feature.)
vectorizer = CountVectorizer(min_df=10)
categories_one_hot_tr = vectorizer.fit_transform(X_tr['clean_categories'].values)
categories_one_hot_test = vectorizer.transform(X_test['clean_categories'].values)
school_state_one_hot_tr = vectorizer.fit_transform(X_tr['school_state'].values)
school_state_one_hot_test = vectorizer.transform(X_test['school_state'].values)
sub_categories_one_hot_tr = vectorizer.fit_transform(X_tr['clean_subcategories'].values)
sub_categories_one_hot_test = vectorizer.transform(X_test['clean_subcategories'].values)
teacher_prefix_one_hot_tr = vectorizer.fit_transform(X_tr['teacher_prefix'].values.astype('U'))  # astype('U') handles NaN prefixes
teacher_prefix_one_hot_test = vectorizer.transform(X_test['teacher_prefix'].values.astype('U'))
project_grade_category_one_hot_tr = vectorizer.fit_transform(X_tr['project_grade_category'].values)
project_grade_category_one_hot_test = vectorizer.transform(X_test['project_grade_category'].values)
from sklearn.preprocessing import StandardScaler
price_scalar = StandardScaler()
price_standardized_tr = price_scalar.fit_transform(X_tr['price'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
price_standardized_test = price_scalar.transform(X_test['price'].values.reshape(-1,1))
previously_posted_scalar = StandardScaler()
previously_posted_standardized_tr = previously_posted_scalar.fit_transform(X_tr['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
previously_posted_standardized_test = previously_posted_scalar.transform(X_test['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
quantity_scalar = StandardScaler()
quantity_standardized_tr = quantity_scalar.fit_transform(X_tr['quantity'].values.reshape(-1, 1))
quantity_standardized_test = quantity_scalar.transform(X_test['quantity'].values.reshape(-1, 1))
from scipy.sparse import hstack
# stack all featurized columns into the final train/test design matrices
set1_tr = hstack((categories_one_hot_tr, sub_categories_one_hot_tr, teacher_prefix_one_hot_tr,
                  school_state_one_hot_tr, project_grade_category_one_hot_tr, price_standardized_tr,
                  previously_posted_standardized_tr, quantity_standardized_tr, title_tfidf_tr, text_tfidf_tr))
set1_test = hstack((categories_one_hot_test, sub_categories_one_hot_test, teacher_prefix_one_hot_test,
                    school_state_one_hot_test, project_grade_category_one_hot_test, price_standardized_test,
                    previously_posted_standardized_test, quantity_standardized_test, title_tfidf_test, text_tfidf_test))
print(set1_tr.shape)
print(set1_test.shape)
import sys
import math
import numpy as np
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in modern scikit-learn
from sklearn.metrics import roc_auc_score
# you might need to install this one
import xgboost as xgb

class XGBoostClassifier():
    def __init__(self, num_boost_round=10, **params):
        self.clf = None
        self.num_boost_round = num_boost_round
        self.params = params
        self.params.update({'objective': 'multi:softprob'})

    def fit(self, X, y, num_boost_round=None):
        num_boost_round = num_boost_round or self.num_boost_round
        self.label2num = {label: i for i, label in enumerate(sorted(set(y)))}
        dtrain = xgb.DMatrix(X, label=[self.label2num[label] for label in y])
        self.clf = xgb.train(params=self.params, dtrain=dtrain, num_boost_round=num_boost_round, verbose_eval=1)
        return self

    def predict(self, X):
        num2label = {i: label for label, i in self.label2num.items()}
        Y = self.predict_proba(X)
        y = np.argmax(Y, axis=1)
        return np.array([num2label[i] for i in y])

    def predict_proba(self, X):
        dtest = xgb.DMatrix(X)
        return self.clf.predict(dtest)

    def score(self, X, y):
        Y = self.predict_proba(X)[:, 1]
        return roc_auc_score(y, Y)

    def get_params(self, deep=True):
        return self.params

    def set_params(self, **params):
        if 'num_boost_round' in params:
            self.num_boost_round = params.pop('num_boost_round')
        if 'objective' in params:
            del params['objective']
        self.params.update(params)
        return self

clf = XGBoostClassifier(eval_metric='auc', num_class=2, nthread=4)
parameters = {
    'num_boost_round': [100, 250, 500],
    'eta': [0.05, 0.1, 0.3],
    'max_depth': [6, 9, 12],
    'subsample': [0.9, 1.0],
    'colsample_bytree': [0.9, 1.0],
}
clf = GridSearchCV(clf, parameters)
# tiny sanity-check fit on toy data
X = np.array([[1, 2], [3, 4], [2, 1], [4, 3], [1, 0], [4, 5]])
Y = np.array([0, 1, 0, 1, 0, 1])
clf.fit(X, Y)
# grid_scores_ was removed from GridSearchCV; use best_params_ / best_score_ instead
print('score:', clf.best_score_)
for param_name in sorted(clf.best_params_.keys()):
    print("%s: %r" % (param_name, clf.best_params_[param_name]))
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import confusion_matrix
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
# No need to split the data into train and test(cv)
# use the Dmatrix and apply xgboost on the whole data
# please check the Quora case study notebook as reference
# note: XGBClassifier has no class_weight parameter; for class imbalance
# use scale_pos_weight instead
xgb_model = xgb.XGBClassifier(n_jobs=-1)
parameters = [{'n_estimators': [10, 20, 50, 70], 'max_depth': [4, 5, 6, 7]}]
# return_train_score=True is needed to get mean_train_score from cv_results_
grid_search = GridSearchCV(xgb_model, parameters, cv=5, scoring='roc_auc', return_train_score=True)
grid_search.fit(set1_tr, y_tr.values.ravel())
scores_train = grid_search.cv_results_['mean_train_score'].reshape(len(parameters[0]['n_estimators']), len(parameters[0]['max_depth']))
scores_cv = grid_search.cv_results_['mean_test_score'].reshape(len(parameters[0]['n_estimators']), len(parameters[0]['max_depth']))
trace1 = go.Scatter3d(x=grid_search.cv_results_['param_n_estimators'],y=grid_search.cv_results_['param_max_depth'],z=scores_train.ravel(), name = 'train')
trace2 = go.Scatter3d(x=grid_search.cv_results_['param_n_estimators'],y=grid_search.cv_results_['param_max_depth'],z=scores_cv.ravel(), name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
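Rather than reading the best hyper-parameters off the 3D plot, they can also be taken directly from the fitted grid search object:

# best hyper-parameters and the corresponding cross-validated AUC
print(grid_search.best_params_)
print("best CV AUC:", grid_search.best_score_)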
# best values from the grid search above
xg_best_params = {'max_depth': 7, 'n_estimators': 70}
xgb_model = xgb.XGBClassifier(max_depth=xg_best_params['max_depth'], n_estimators=xg_best_params['n_estimators'], n_jobs=-1)
xgb_model.fit(set1_tr, np.ravel(y_tr, order='C'))
# predict in batches of 1000 rows to keep memory usage low
y_train_pred = []
y_test_pred = []
for j in range(0, set1_tr.shape[0], 1000):
    y_train_pred.extend(xgb_model.predict_proba(set1_tr.tocsr()[j:j+1000])[:, 1])
for j in range(0, set1_test.shape[0], 1000):
    y_test_pred.extend(xgb_model.predict_proba(set1_test.tocsr()[j:j+1000])[:, 1])
train_fpr, train_tpr, thresholds = roc_curve(y_tr, y_train_pred)
test_fpr, test_tpr, thresholds = roc_curve(y_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
xg_train_auc=auc(train_fpr, train_tpr)
xg_test_auc=auc(test_fpr, test_tpr)
plt.legend()
plt.xlabel("False Positive Range(FPR)")
plt.ylabel("True Positive Range(TPR)")
plt.title("AUC PLOTS")
plt.show()
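Finally, since confusion_matrix was imported at the top but never used, here is a quick sketch of the test-set confusion matrix, thresholding the predicted probabilities at 0.5 (the threshold choice is ours, not from the original analysis):

# binarize the predicted probabilities at 0.5 and tabulate test-set errors
y_test_label = (np.array(y_test_pred) >= 0.5).astype(int)
print(confusion_matrix(np.ravel(y_test), y_test_label))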